Measuring Gaze Orientation for Human-Robot Interaction
Authors
Abstract
In the context of Human-Robot interaction, estimating gaze orientation provides useful information about the human's focus of attention. This is contextual information: when you point at something, you usually look at it. Estimating gaze orientation requires head pose estimation. Several techniques exist to estimate head pose from images; they are mainly based on training [3, 4] or on tracking local face features [6]. The approach described here is based on tracking local face features in image space using online learning; it is a mixed approach, since we track face features using some learning at the feature level. It uses SURF features [2] to guide detection and tracking. Such key features can be matched between images and used for object detection or object tracking [10]. Training-based approaches typically work on fixed-size, low-resolution images because of computational cost, whereas approaches based on local feature tracking work on high-resolution images.

Tracking face features such as the eyes, nose, and mouth is a common problem in many applications, such as facial expression detection or video conferencing [8], but most of these applications focus on frontal face images [9]. We developed an algorithm based on face feature tracking using a parametric model. First we detect the face, then we detect the face features in the following order: eyes, mouth, nose. To achieve detection up to full profile, we use sets of SURF descriptors to learn what the eyes, mouth, and nose look like once tracking is initialized. Once these sets of SURF descriptors are known, they are used to detect and track the face features. A SURF keypoint carries a descriptor that is commonly used to identify it; here we add global geometric information by also using the relative positions between keypoints. We then use a particle filter to track the face features with these SURF-based detectors, compute the head pose angles from the feature positions, and pass the results through a median filter.

This paper is organized as follows. Section 2 describes our modeling of visual features, and Section 3 presents our tracking implementation. Section 4 presents the results obtained with our implementation, and Section 5 discusses future work.
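The idea of combining a learned descriptor set with the relative geometry between keypoints can be illustrated with the minimal sketch below. It is an assumption-laden illustration, not the paper's implementation: SURF requires the non-free OpenCV contrib module, so freely available ORB keypoints are used as a stand-in, and the function names, the ROI format, and the translation-only matching model are all hypothetical.

import cv2
import numpy as np

detector = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def learn_feature_model(frame, roi):
    # Collect keypoints and descriptors inside a feature ROI (eye, mouth or nose)
    # when tracking is initialized; this set plays the role of the online-learned model.
    # frame: 8-bit grayscale image, roi: (x, y, w, h) in frame coordinates.
    x, y, w, h = roi
    patch = frame[y:y + h, x:x + w]
    keypoints, descriptors = detector.detectAndCompute(patch, None)
    # Keypoint positions are stored relative to the ROI centre so the relative
    # geometry between keypoints can be reused when re-detecting the feature.
    centre = np.array([w / 2.0, h / 2.0])
    offsets = (np.array([kp.pt for kp in keypoints]) - centre
               if keypoints else np.empty((0, 2)))
    return {"descriptors": descriptors, "offsets": offsets}

def locate_feature(frame, model):
    # Match the learned descriptor set against the current frame and let each
    # match vote for the feature centre through its stored offset
    # (assuming mostly translational motion of the feature between frames).
    keypoints, descriptors = detector.detectAndCompute(frame, None)
    if descriptors is None or model["descriptors"] is None:
        return None
    matches = matcher.match(model["descriptors"], descriptors)
    if not matches:
        return None
    votes = [np.array(keypoints[m.trainIdx].pt) - model["offsets"][m.queryIdx]
             for m in matches]
    # The median of the votes gives a robust estimate of the feature centre,
    # which could then serve as a measurement for a tracker.
    return np.median(np.array(votes), axis=0)

In the full system described above, such per-feature detections would feed the particle filter, and the head pose angles computed from the tracked feature positions would then be smoothed with a median filter; those stages are omitted from this sketch.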
Similar resources
Robot gaze does not reflexively cue human attention
Joint visual attention is a critical aspect of typical human interactions. Psychophysics experiments indicate that people exhibit strong reflexive attention shifts in the direction of another person’s gaze, but not in the direction of non-social cues such as arrows. In this experiment, we ask whether robot gaze elicits the same reflexive cueing effect as human gaze. We consider two robots, Zeno...
An Experimental Study on Blinking and Eye Movement Detection via EEG Signals for Human-Robot Interaction Purposes Based on a Spherical 2-DOF Parallel Robot
Blinking and eye movement are among the most important abilities that most people retain, even people with spinal cord problems. Using this ability, these people can handle some of their activities, such as moving their wheelchair, without the help of others. One of the most important fields in Human-Robot Interaction is the development of artificial limbs working with brain signals. The purpos...
Tracking Gaze and Visual Focus of Attention of People Involved in Social Interaction
The visual focus of attention (VFOA) has been recognized as a prominent conversational cue. We are interested in the VFOA tracking of a group of people involved in social interaction. We note that in this case the participants look either at each other or at an object of interest; therefore they don’t always face a camera and, consequently, their gazes (and their VFOAs) cannot be based on eye d...
Designing Gaze Behavior for Humanlike Robots
Humanlike robots are designed to communicate with people using human verbal and nonverbal language. Social gaze cues play an important role in this communication. Although research in human-robot interaction has shown that people understand these gaze cues and interpret them as valid signals for human communication, whether they can serve as effective communicative mechanisms and lead to signif...
Does Robot Eye-gaze Help Humans Identify Objects?
The success of a social robot relies on its interactions with humans. To enhance human-robot interaction I looked toward useful forms of communication in human-human interaction. Humans communicate through verbal and non-verbal cues, in particular, my study focuses on eye gaze and the effects of its application on human-robot interaction. Specifically, I wanted to know whether having a robot ex...
Publication date: 2009